
    Translations: effects of viewpoint, feature, naming and context on identifying repeatedly copied drawings

    We explored the tension between bottom-up and top-down contributions to object recognition in a collaboration between a visual artist and a cognitive psychologist. Initial pictorial renderings of objects and animals from various viewpoints were iteratively copied, producing series of drawings that evolved from highly concrete into highly abstract images. In a drawing-identification task in which sets were shown in reverse order, participants were more accurate, more confident, and quicker to correctly identify the evolving image when it was originally displayed from a canonical viewpoint with all salient features present. When images were shown in random order, more abstract images could be resolved as a result of previously identifying a more concrete iteration of the same drawing. The results raise issues about the influence of viewpoint and feature on the preservation of pictorial images and about the role of labelling in the interpretation of ambiguous stimuli. In addition, the study highlights a procedure in which visual stimuli can degrade without a substantial loss of complexity.

    Masking of Figure-Ground Texture and Single Targets by Surround Inhibition: A Computational Spiking Model

    A visual stimulus can be made invisible, i.e., masked, by the presentation of a second stimulus. In the sensory cortex, neural responses to a masked stimulus are suppressed, yet how this suppression comes about is still debated. Inhibitory models explain masking by asserting that the mask exerts an inhibitory influence on the neural responses evoked by the target, whereas other models argue that masking interferes with recurrent or reentrant processing. Using computer modeling, we show that surround inhibition evoked by ON and OFF responses to the mask suppresses the responses to a briefly presented stimulus in forward and backward masking paradigms. Our model results resemble several previously described psychophysical and neurophysiological findings in perceptual masking experiments and are in line with earlier theoretical descriptions of masking. We suggest that the precise spatiotemporal influence of surround inhibition is relevant for visual detection.
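    The suppressive mechanism summarized above can be sketched with a small rate-based toy model: delayed inhibition driven by the mask's ON and OFF transients quenches the response to a brief target, and the suppression weakens as the stimulus onset asynchrony (SOA) grows. This is a minimal sketch under assumed parameters (transient shapes, weights, time constants), not the spiking model described in the paper.

```python
# Toy rate model: delayed surround inhibition from mask ON/OFF transients
# suppresses a briefly presented target. All parameters are illustrative.
import numpy as np

dt = 1.0                       # time step (ms)
T = 300                        # simulated duration (ms)
t = np.arange(0, T, dt)

def transient(onset, dur=10, amp=1.0):
    """Rectangular transient starting at `onset` and lasting `dur` ms."""
    return amp * ((t >= onset) & (t < onset + dur)).astype(float)

def mask_drive(mask_onset, mask_offset):
    """ON response at mask onset plus OFF response at mask offset."""
    return transient(mask_onset) + transient(mask_offset)

def target_visibility(target_onset, mask_onset, mask_offset,
                      w_inh=1.5, delay=10, tau=15.0):
    """Leaky integration of target excitation minus delayed surround
    inhibition driven by the mask; returns the integrated response as a
    crude proxy for target visibility."""
    exc = transient(target_onset, dur=20)
    inh = np.zeros_like(t)
    d = int(delay / dt)
    inh[d:] = mask_drive(mask_onset, mask_offset)[:len(t) - d]
    r = np.zeros_like(t)
    for i in range(1, len(t)):
        drive = exc[i] - w_inh * inh[i]
        r[i] = max(r[i - 1] + dt / tau * (-r[i - 1] + drive), 0.0)
    return r.sum() * dt

# Backward masking: a 50 ms mask follows the target at various SOAs.
for soa in (20, 50, 100, 200):
    v = target_visibility(target_onset=50,
                          mask_onset=50 + soa, mask_offset=50 + soa + 50)
    print(f"SOA {soa:3d} ms -> integrated target response {v:6.1f}")
```

    Running the loop prints an integrated target response that grows with SOA, the qualitative signature of backward masking.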

    Model Cortical Association Fields Account for the Time Course and Dependence on Target Complexity of Human Contour Perception

    Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans was measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least ms of cortical processing time. Our results provide evidence that cortical association fields between orientation-selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas.
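    The reported sigmoidal fits with exponential time constants can be illustrated with a short curve-fitting sketch. The parameterization below (2AFC accuracy rising from the chance level of 0.5 toward an asymptote with time constant tau after an onset delay t0) and the data points are illustrative assumptions, not the parameterization or measurements from the study.

```python
# Fit a saturating-exponential psychometric function to 2AFC accuracy vs.
# presentation time. Data points are made-up placeholders for illustration.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(t, t0, tau, a_max):
    """2AFC accuracy rising exponentially from chance (0.5) toward a_max."""
    return 0.5 + (a_max - 0.5) * (1.0 - np.exp(-np.clip(t - t0, 0, None) / tau))

# Placeholder data: presentation times (ms) and observed accuracies.
t_ms = np.array([20, 33, 50, 66, 100, 133, 200], dtype=float)
acc  = np.array([0.52, 0.58, 0.70, 0.78, 0.88, 0.92, 0.95])

params, _ = curve_fit(psychometric, t_ms, acc, p0=(20.0, 50.0, 0.95),
                      bounds=([0, 1, 0.5], [100, 500, 1.0]))
t0, tau, a_max = params
print(f"onset t0 = {t0:.1f} ms, time constant tau = {tau:.1f} ms, "
      f"asymptote = {a_max:.2f}")
```

    Fitted values of tau would then be compared across amoeba complexity levels, mirroring the 30-91 ms range reported above.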

    Limits of event-related potential differences in tracking object processing speed

    We report results from two experiments in which subjects had to categorize briefly presented upright or inverted natural scenes. In the first experiment, subjects decided whether images contained animals or human faces presented at different scales. Behavioral results showed virtually identical processing speed between the two categories and very limited effects of inversion. One type of event-related potential (ERP) comparison, potentially capturing low-level physical differences, showed large effects with onsets at about 150 msec in the animal task. However, in the human face task, those differences started as early as 100 msec. In the second experiment, subjects responded to close-up views of animal faces or human faces in an attempt to limit physical differences between image sets. This manipulation almost completely eliminated small differences before 100 msec in both tasks. But again, despite very similar behavioral performances and short reaction times in both tasks, human faces were associated with earlier ERP differences compared with animal faces. Finally, in both experiments, as an alternative way to determine processing speed, we compared the ERPs to the same images when seen as targets and nontargets in different tasks. Surprisingly, all task-dependent ERP differences had relatively long latencies. We conclude that task-dependent ERP differences fail to capture object processing speed, at least for some categories like faces. We discuss models of object processing that might explain our results, as well as alternative approaches.
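    A common way to quantify such ERP difference onsets, sketched below on synthetic data, is to run point-by-point statistical tests between the two conditions and define the onset as the first sustained run of significant samples. This is an illustrative procedure with assumed thresholds and signals, not the authors' analysis pipeline.

```python
# Estimate the onset latency of an ERP difference between two conditions
# via point-by-point t-tests on synthetic single-trial data (illustrative).
import numpy as np
from scipy import stats

fs = 500                                   # sampling rate (Hz)
times = np.arange(-0.1, 0.5, 1 / fs)       # epoch from -100 to +500 ms
rng = np.random.default_rng(0)

def synthetic_erp(n_trials, effect_onset):
    """Noisy single-trial ERPs with a deflection starting at effect_onset (s)."""
    effect = 3.0 * (times >= effect_onset) * np.exp(-(times - effect_onset) / 0.1)
    return effect + rng.normal(0, 2.0, size=(n_trials, times.size))

cond_a = synthetic_erp(60, effect_onset=0.10)   # e.g. human-face trials
cond_b = synthetic_erp(60, effect_onset=0.15)   # e.g. animal trials

t_vals, p_vals = stats.ttest_ind(cond_a, cond_b, axis=0)

def onset_latency(p_vals, alpha=0.05, min_run=10):
    """First sample where p < alpha holds for at least min_run consecutive samples."""
    sig = p_vals < alpha
    for i in range(sig.size - min_run + 1):
        if sig[i:i + min_run].all():
            return times[i]
    return None

onset = onset_latency(p_vals)
print(f"estimated ERP difference onset: {onset * 1000:.0f} ms"
      if onset is not None else "no sustained difference found")
```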

    Object recognition in congruent and incongruent natural scenes: A life-span study.

    Efficient processing of our complex visual environment is essential, and many daily visual tasks rely on accurate and fast object recognition. It is therefore important to evaluate how object recognition performance evolves over the course of adulthood. Surprisingly, this ability has not yet been investigated in the aged population, although several neuroimaging studies have reported altered activity in high-level visual ventral regions when elderly subjects process natural stimuli. In the present study, color photographs of various objects embedded in contextual scenes were used to assess object categorization performance in 97 participants aged 20 to 91 years. Objects were either animals or pieces of furniture, embedded in either congruent or incongruent contexts. In every age group, subjects showed reduced categorization performance, both in terms of accuracy and speed, when objects were seen in incongruent vs. congruent contexts. In subjects over 60 years old, object categorization was greatly slowed compared to young and middle-aged subjects. Moreover, subjects over 75 years old showed a significant decrease in categorization accuracy when objects were seen in incongruent contexts. This indicates that scene incongruence may be particularly disruptive in late adulthood, thereby impairing object recognition. Our results suggest that daily visual processing of complex natural environments may become less efficient with age, which might impact performance in everyday visual tasks.
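    The kind of summary behind these findings, accuracy and median correct response time per age group and congruence condition, can be sketched as follows. The column names, age-group bins, and generated trials are placeholders, not the study's data or analysis code.

```python
# Summarize a congruence effect per age group from single-trial data
# (placeholder trials; illustrative structure only).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
age_groups = ["20-39", "40-59", "60-74", "75+"]

# Placeholder single-trial data: one row per categorization trial.
trials = pd.DataFrame({
    "age_group": rng.choice(age_groups, size=4000),
    "congruent": rng.choice([True, False], size=4000),
})
trials["correct"] = rng.random(4000) < np.where(trials["congruent"], 0.95, 0.88)
trials["rt_ms"] = rng.normal(np.where(trials["congruent"], 550, 600)
                             + trials["age_group"].map(
                                 {"20-39": 0, "40-59": 30, "60-74": 120, "75+": 160}),
                             80)

# Accuracy per cell, and median RT on correct trials only.
summary = pd.DataFrame({
    "accuracy": trials.groupby(["age_group", "congruent"])["correct"].mean(),
    "median_rt_ms": trials[trials["correct"]]
                    .groupby(["age_group", "congruent"])["rt_ms"].median(),
}).round(2)
print(summary)
```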